Greg Detre
Monday, February 24, 2003
he characterises the Chinese Room in terms of a processor and a set of rules, i.e. a von Neumann machine - does it help that brains don't differentiate between processing and rules??? almost certainly not relevant, since that's only at the level of implementation
could you argue anyway that a neural network has the processing in the activations, and the memory in the weights???
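The activations/weights split above can be made concrete with a toy feedforward step - a minimal sketch (all names here are mine, not from the text), in which the long-term "rules" sit in a weight matrix and the transient "processing" state sits in the activations:

```python
import numpy as np

# Toy illustration: in a connectionist system, the "memory" (rules)
# lives in the weights, while the "processing" lives in the activations
# that exist only transiently as input flows through.
rng = np.random.default_rng(0)

# "Memory": a fixed weight matrix (here random, standing in for learned rules).
W = rng.standard_normal((4, 3))

def process(x):
    """One processing step: the activations are transient state."""
    return np.tanh(W @ x)

x = rng.standard_normal(3)   # some input
a = process(x)               # activations: shape (4,), gone after use
print(a.shape)               # (4,)
```

The point of the sketch is only that the processor/rules distinction of a von Neumann machine maps onto activations/weights rather than onto two separate components.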
why is the brain connectionist, if the algorithms it's implementing are so much higher-level??? i know that we're talking about separate levels, of implementation and of algorithm or function, but wouldn't you expect there to be more neuroscientific evidence to support some of the high-level AI theories that you have put forward???
how many of the main theories of humour has Minsky embraced???
humour as surprise - yes, in that you're sneaking up on your censors by employing a second frame that they don't notice, right???
superiority - yes, because you're often using it as a mechanism for learning (often through social or self-rebuke)
relief - no, not really???
incongruity - is this the same as surprise???
do we have a separate logic-reasoning agent, or is it simply a version of uniframing etc. that happens to work with more abstract, algebraic symbols???
do we have a uniframing agent, or is it a method that all agents employ???
does it make sense to talk of different types of agents??? are k-lines, scripts, isonomes etc. all types of agents???
do you really think that entire old agents, or even large portions of them, remain dormant in our brains to this day???
yes, he does
will it ever be possible to tease apart the distributed connectionist representation of the brain to see different agents and realms???
why does he think we have emotions??? how do they relate to learning??? did he link them to intellect in the same way as Damasio???
how do you explain the so-called 'g-factor' noticed in almost all IQ tests, where children tend on average to cluster towards the high or low end across all their abilities, rather than their performance in different domains being almost independent of each other, as you might expect if our agents are so separate???
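The g-factor question can be sketched statistically: if separate agents drove each domain independently, cross-domain scores should be roughly uncorrelated, whereas a shared "g" component makes them all positively correlated. A hypothetical simulation (the numbers and names are illustrative assumptions, not data):

```python
import numpy as np

# Hypothetical simulation: compare cross-domain score correlations when a
# shared "g" component exists vs. when domains are fully independent.
rng = np.random.default_rng(1)
n_children, n_domains = 1000, 5

g = rng.standard_normal((n_children, 1))            # shared ability factor
noise = rng.standard_normal((n_children, n_domains))

with_g = 0.7 * g + noise   # each domain score partly driven by common g
without_g = noise          # fully independent "agents" per domain

def mean_offdiag_corr(scores):
    """Average correlation between distinct domains."""
    c = np.corrcoef(scores, rowvar=False)
    return (c.sum() - np.trace(c)) / (c.size - len(c))

print(mean_offdiag_corr(with_g) > mean_offdiag_corr(without_g))  # True
```

With the shared factor the mean cross-domain correlation comes out around 0.3; without it, near zero - which is the pattern the g-factor observation reports, and what an independent-agents picture would not predict.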
i think he can easily argue that he's shown how children who learn good learning strategies (see other lecture file) are going to do better overall in pretty much every area. moreover, he's also shown how connected and inter-dependent the different agents are
would you expect to see agent-level information in the genome at all???
he seems to imply not (pg 310)
why is our STM so small???
could we build a system now that would learn and benefit from these number-meaning ideas??? if not, isn't that an indictment of this sort of free-wheeling speculation??? don't you need to design (or even reverse-engineer) with a task in mind???
could we actually teach maths any other way??? wouldn't formal maths be harder to learn at a later stage???
isn't the polyneme approach wasteful of memory??? (see 19.5)
not really, if the properties each agent memorises don't overlap much
is this more than a semantic network, given that multimodal/sensory info is involved???
could the level/spiralling control be modified like the 'temperature'??? (see 20.6)
memorisers vs recognisers???
can't account for why Piaget's discoveries were so late - did everyone just agree that children were little people???
do you feel there's enough talk of learning in the SoM???
think about credit assignment
why are some thoughts harder than others???
what are you up to now???
how successful have implementations of the SoM been???
is there anything at all to be said about the types of computation that the mind performs as being special??? if not, are you not then committed to panpsychism???
what role does context play in common sense???
do you ever think about alternative forms of language???
how would you represent space???
is there a difference between learning language as an individual and creating a language as a community???
why is language serial???
do you ever worry about Gödel's theorem???
< pg 72 of 172 in AI manifesto
the important thing about language is the ability to make a parallel structure serial. we already have representations, even if not a lexicon, so it's not simply the ability to label things that helps elevate man above animal
does that mean that if we had parallel languages, they'd lose their power???
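The parallel-to-serial point can be illustrated with a toy example (the scene and vocabulary are made up for illustration): a represented scene holds many relations simultaneously, but speech has to emit them one at a time, in some chosen order.

```python
# Toy sketch of "making a parallel structure serial": a scene is a set of
# simultaneous relations (parallel), but an utterance must emit them serially.
scene = {
    "cat": {"colour": "black", "on": "mat"},
    "mat": {"colour": "red"},
}

def serialise(scene):
    """Linearise the parallel relation structure into one word sequence."""
    words = []
    for entity, relations in scene.items():
        for rel, value in relations.items():
            words.append(f"{entity} {rel} {value}")
    return "; ".join(words)

print(serialise(scene))
# "cat colour black; cat on mat; mat colour red"
```

The labels themselves do no work here; what the serialisation adds is an ordering over relations that all held at once - which is the sense in which language is more than a lexicon.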
he thinks that we don't do 3D spatial reasoning, only 2D, except by tricks
are there psych experiments to undermine this???
Minsky: is there a single good idea in Spielberg's AI?
I'm not sure that I agree that all the old bits of self hang on - we overwrite them
how do prosopagnosics learn to read the emotion in people's voices, according to Meltzoff + Gopnik (curious machines)???
where do k-lines fit into tEM???
would a common sense machine need senses and a motor system???
Minsky: well, there's Helen Keller…
what's his argument to the chaos-theory attractor objection in ch 1???
if consciousness is subtractive, doesn't that leave you with panpsychism???
don't you think that, to some degree, the brain doesn't really know what it's doing or how it works - so maybe, to reproduce some of its functionality, we as AI researchers don't have to either???
perhaps we do need to at some very high and very low levels, but not at a medium level
why do we make mistakes???